
    Profiling Behavior of Intruders on Enterprise Honeynet: Deployment and Analysis

    Network and information security remains one of the areas most in need of attention and improvement within enterprise information systems. Intruders on enterprise networks are no longer hacking merely for fun or to show off their programming skills; they now do it for profit. As a result, profiling the behavior of intruders who trespass on business information systems within an enterprise networking environment has recently become a primary focus of cyber-security research. In this ongoing project, we deploy a novel honeynet system built on advanced virtualization technologies to collect forensic evidence of attacks, allowing attackers to interact with compromised computers in a real enterprise network. We then analyze the intruders' behavior to investigate their hidden linkages with enterprise networks and the attackers' potential group structures, including attributes such as geographic distribution and service communities, thus providing strategies that help enterprise-network administrators stay protected against malicious attacks from external intruders. Preliminary results are very promising: over one month, intruders' activity was distributed across more than 60 countries, and the most popular service intruders used to interact with compromised machines was HTTP itself.
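The kind of profiling the abstract describes, tallying attack sources by country and ranking the services intruders interact with, can be sketched with simple frequency counts. The log records below are entirely hypothetical examples, not data from the study:

```python
from collections import Counter

# Hypothetical intrusion-log records of the kind a honeynet might collect;
# each entry is (source country, targeted service). Illustrative data only.
events = [
    ("US", "http"), ("CN", "http"), ("US", "ssh"),
    ("RU", "http"), ("CN", "smtp"), ("BR", "http"),
]

# Frequency of attack sources per country and of targeted services.
country_counts = Counter(country for country, _ in events)
service_counts = Counter(service for _, service in events)

def most_targeted_service(counts):
    """Return the service with the most recorded intruder interactions."""
    return counts.most_common(1)[0][0]

print(len(country_counts))                    # → 4 distinct source countries
print(most_targeted_service(service_counts))  # → http
```

A real deployment would derive the country field by geolocating source IPs and would aggregate over far larger logs, but the grouping logic is the same.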

    FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation

    Facial expression analysis based on machine learning requires a large amount of well-annotated data to reflect different changes in facial motion. Publicly available datasets help accelerate research in this area by providing benchmark resources, but all of these datasets, to the best of our knowledge, are limited to rough annotations of action units, recording only their absence, presence, or a five-level intensity according to the Facial Action Coding System. To meet the need for videos labeled in greater detail, we present a well-annotated dataset named FEAFA for Facial Expression Analysis and 3D Facial Animation. One hundred and twenty-two participants, including children, young adults, and elderly people, were recorded in real-world conditions. In addition, 99,356 frames were manually labeled using an Expression Quantitative Tool we developed to quantify 9 symmetrical FACS action units, 10 asymmetrical (unilateral) FACS action units, 2 symmetrical FACS action descriptors, and 2 asymmetrical FACS action descriptors; each action unit or action descriptor is annotated with a floating-point number between 0 and 1. To provide a baseline for future research, a benchmark for the regression of action unit values based on Convolutional Neural Networks is presented. We also demonstrate the potential of our FEAFA dataset for 3D facial animation. Almost all state-of-the-art facial animation algorithms rely on 3D face reconstruction. We therefore propose a novel method that drives virtual characters based only on action unit values regressed from the 2D video frames of source actors. Comment: 9 pages, 7 figures
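A regression baseline over per-frame action-unit intensities in [0, 1], as the abstract describes, would typically be scored with an error metric such as mean absolute error. The sketch below illustrates that evaluation step; the frame labels and raw predictions are made-up examples, not FEAFA data, and the clamping convention is an assumption:

```python
def clamp01(x):
    """Restrict a raw regression output to the valid AU intensity range [0, 1]."""
    return min(1.0, max(0.0, x))

def mean_absolute_error(labels, preds):
    """Average |label - clamped prediction| over frames for a single action unit."""
    assert len(labels) == len(preds)
    return sum(abs(y - clamp01(p)) for y, p in zip(labels, preds)) / len(labels)

labels = [0.0, 0.25, 0.8, 1.0]   # hypothetical ground-truth AU intensities
preds = [0.1, 0.2, 0.9, 1.3]     # hypothetical raw model outputs (1.3 clamps to 1.0)
print(mean_absolute_error(labels, preds))  # → 0.0625
```

In practice the predictions would come from the CNN regressor, evaluated per action unit across all labeled frames.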